# Research Use

## 3b Zh Ft Research Release Q8_0 GGUF

A GGUF-format conversion (Q8_0 quantization) of canopylabs/3b-zh-ft-research_release, suitable for Chinese text generation tasks.

- License: Apache-2.0
- Tags: Large Language Model, Chinese
- Author: cludyw · Downloads: 20 · Likes: 0
## 3b Zh Ft Research Release Q4_K_M GGUF

A Chinese language model converted from canopylabs/3b-zh-ft-research_release to GGUF format (Q4_K_M quantization), suitable for text generation tasks; see the loading sketch below.

- License: Apache-2.0
- Tags: Large Language Model, Chinese
- Author: freddyaboulton · Downloads: 142 · Likes: 1
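Both GGUF conversions above are intended for a GGUF-compatible runtime such as llama.cpp. The following is a minimal sketch using the llama-cpp-python bindings; the local file name is a placeholder, and the sketch assumes the converted checkpoint behaves like a standard causal text-generation GGUF model.

```python
from llama_cpp import Llama

# Placeholder path: point this at the downloaded Q4_K_M (or Q8_0) GGUF file.
llm = Llama(
    model_path="./3b-zh-ft-research_release-Q4_K_M.gguf",
    n_ctx=4096,        # context window size
    n_gpu_layers=-1,   # offload all layers to GPU when one is available
)

# Simple completion-style generation with a Chinese prompt.
out = llm("请用一句话介绍GGUF格式。", max_tokens=128, temperature=0.7)
print(out["choices"][0]["text"])
```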
## Pile T5 Base

Pile-T5 Base is an encoder-decoder model trained on The Pile with the T5x library, using a masked language modeling (MLM) objective for 2 million steps, roughly 2 trillion tokens.

- Tags: Large Language Model, Transformers, English
- Author: EleutherAI · Downloads: 50 · Likes: 19
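To make the MLM objective concrete, here is an illustrative, simplified sketch of T5-style span corruption in plain Python. The sentinel naming, masking rate, and span length are assumptions chosen for illustration, not Pile-T5's exact training recipe.

```python
import random

def span_corrupt(tokens, mask_rate=0.15, mean_span=3):
    """Replace random spans with sentinel markers (simplified T5-style MLM)."""
    inputs, targets = [], []
    i, sentinel_id = 0, 0
    while i < len(tokens):
        # Start a masked span with probability mask_rate / mean_span,
        # so roughly mask_rate of all tokens end up masked.
        if random.random() < mask_rate / mean_span:
            span = tokens[i:i + mean_span]
            sentinel = f"<extra_id_{sentinel_id}>"
            inputs.append(sentinel)            # encoder sees one sentinel per span
            targets.extend([sentinel] + span)  # decoder must reproduce the span
            sentinel_id += 1
            i += mean_span
        else:
            inputs.append(tokens[i])
            i += 1
    return inputs, targets

x, y = span_corrupt("the pile is a large and diverse text corpus".split())
print(x)  # encoder input with sentinel placeholders
print(y)  # decoder target: sentinels followed by the dropped spans
```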
## Yi Ko 6B

Yi-Ko-6B extends the 01-ai/Yi model with continued pre-training on a Korean/English corpus and an enlarged vocabulary, supporting bilingual text generation in Korean and English.

- License: Apache-2.0
- Tags: Large Language Model, Transformers, Supports Multiple Languages
- Author: beomi · Downloads: 3,183 · Likes: 37
## IF PromptMKR Phi

A QLoRA fine-tune of microsoft/phi-1_5 on the IFprompMKR dataset, intended primarily for text generation tasks.

- Tags: Large Language Model, Transformers
- Author: impactframes · Downloads: 23 · Likes: 2
## Heron Preliminary Git Llama 2 70b V0

A vision-language model pre-trained on image-text pairs, based on the Llama 2 70B architecture, suitable for image captioning tasks.

- Tags: Image-to-Text, Transformers, Japanese
- Author: turing-motors · Downloads: 14 · Likes: 1
## Llava Pretrain Vicuna 7b V1.3

LLaVA is an open-source multimodal chatbot, fine-tuned from LLaMA/Vicuna on GPT-generated multimodal instruction-following data.

- Tags: Text-to-Image, Transformers
- Author: liuhaotian · Downloads: 54 · Likes: 1
## Llama2 Xs 460M Experimental

This series of repositories open-sources reproductions of Meta AI's LLaMA and LLaMA 2 large language models at significantly reduced sizes. The llama1_s experimental version contains 1.8 billion parameters, while the llama2_xs experimental version has only 460 million parameters.

- Tags: Large Language Model, Transformers, English
- Author: ahxt · Downloads: 145 · Likes: 13
## Opt 2.7b

OPT is an open-source series of large language models from Meta AI, with parameter counts ranging from 125 million to 175 billion, aimed at promoting open research on large-scale language models.

- License: Other
- Tags: Large Language Model, English
- Author: facebook · Downloads: 53.87k · Likes: 83
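As a quick usage illustration, the 2.7B checkpoint can be loaded with the Hugging Face Transformers causal-LM API. This is a minimal sketch assuming the public facebook/opt-2.7b model ID and default generation settings.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "facebook/opt-2.7b"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Sample a short continuation from a plain-text prompt.
inputs = tok("Open research on large language models", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.9)
print(tok.decode(out[0], skip_special_tokens=True))
```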